90 research outputs found

    An ontology engineering approach for acquiring and exploiting knowledge from textual documents: towards knowledge and learning objects

    Thesis digitized by the Division de la gestion de documents et des archives of the Université de Montréal

    FICLONE: Improving DBpedia Spotlight Using Named Entity Recognition and Collective Disambiguation

    In this paper we present FICLONE, which aims to improve the performance of DBpedia Spotlight, not only for the task of semantic annotation (SA) but also for the sub-task of named entity disambiguation (NED). To achieve this aim, we first enhance the spotting phase by combining a named entity recognition system (Stanford NER) with the results of DBpedia Spotlight. Second, we improve the disambiguation phase by using coreference resolution and by exploiting a lexicon that associates a list of potential Wikipedia entities with surface forms. Finally, to select the correct entity among the candidates found for a mention, FICLONE relies on collective disambiguation, an approach that has proved successful in many other annotators and that takes the other mentions in the text into consideration. Our experiments show that FICLONE not only substantially improves the performance of DBpedia Spotlight for the NED sub-task but also generally outperforms other state-of-the-art systems. For the SA sub-task, FICLONE also outperforms DBpedia Spotlight on the dataset provided by the DBpedia Spotlight team.
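    The collective disambiguation idea in this abstract can be sketched in a few lines: pick, for each mention, the candidate that balances its local prior against its coherence with the other mentions' current choices. This is a toy illustration under assumed interfaces (the candidate lists, `prior` scores, and `relatedness` function are stand-ins), not FICLONE's actual implementation.

    ```python
    # Minimal sketch of collective entity disambiguation. All data
    # structures here are illustrative assumptions, not FICLONE's API.

    def disambiguate(mentions, candidates, relatedness, prior):
        """For each mention, pick the candidate maximizing its prior plus
        its average relatedness to the other mentions' current choices
        (one greedy refinement pass over an initial prior-only choice)."""
        choice = {m: max(candidates[m], key=lambda e: prior[m][e]) for m in mentions}
        for m in mentions:
            def score(e):
                others = [choice[o] for o in mentions if o != m]
                coherence = sum(relatedness(e, c) for c in others) / max(len(others), 1)
                return prior[m][e] + coherence
            choice[m] = max(candidates[m], key=score)
        return choice
    ```

    With a mention like "Paris" next to "France", the coherence term can override a misleading prior (e.g. one favouring "Paris Hilton") in favour of the city, which is the intuition behind considering the other mentions in the text.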

    An Integrated Approach for Automatic Aggregation of Learning Knowledge Objects

    This paper presents the Knowledge Puzzle, an ontology-based platform designed to facilitate domain knowledge acquisition from textual documents for knowledge-based systems. First, the Knowledge Puzzle Platform performs an automatic generation of a domain ontology from documents' content through natural language processing and machine learning technologies. Second, it employs a new content model, the Knowledge Puzzle Content Model, which aims to model learning material from annotated content. Annotations are performed semi-automatically based on IBM's Unstructured Information Management Architecture and are stored in an organizational memory (OM) as knowledge fragments. The organizational memory is used as a knowledge base for a training environment (an Intelligent Tutoring System or an e-learning environment). The main objective of these annotations is to enable the automatic aggregation of Learning Knowledge Objects (LKOs) guided by instructional strategies, which are provided through SWRL rules. Finally, a methodology is proposed to generate SCORM-compliant learning objects from these LKOs.

    Assessing and Improving Domain Knowledge Representation in DBpedia

    With the development of knowledge graphs and the billions of triples generated on the Linked Data cloud, it is paramount to ensure the quality of data. In this work, we focus on one of the central hubs of the Linked Data cloud, DBpedia. In particular, we assess the quality of DBpedia for domain knowledge representation. Our results show that DBpedia still has much room for improvement in this regard, especially for the description of concepts and their linkage with the DBpedia ontology. Based on this analysis, we leverage open relation extraction and the information already available on DBpedia to partly correct the issue, by providing novel relations extracted from Wikipedia abstracts and by discovering entity types using the dbo:type predicate. Our results show that open relation extraction can indeed help enrich domain knowledge representation in DBpedia.
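    Extracting type statements from Wikipedia abstracts, as described above, can be illustrated with the common "X is a/an Y" copula pattern. This regex is a deliberately naive stand-in: real open relation extraction systems, including whatever the authors used, handle far more linguistic variation than this.

    ```python
    import re

    # Toy sketch of type extraction from an abstract via the
    # "is a/an <noun phrase>" pattern. Purely illustrative; this is
    # not the paper's extraction pipeline.

    def extract_type(abstract):
        """Return the lowercase noun phrase following 'is a/an', stopping
        at punctuation or a relative/prepositional continuation."""
        m = re.search(
            r"\bis an?\s+([a-z][a-z\- ]*?[a-z])(?=[,.;]|\s+(?:that|which|of|in)\b)",
            abstract,
        )
        return m.group(1) if m else None
    ```

    Applied to an abstract sentence like "Python is a programming language that supports multiple paradigms.", this yields "programming language", which could then be mapped to an ontology class before being asserted as a type.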

    Light-Weight Ontology Alignment using Best-Match Clone Detection

    Abstract: Ontologies are a key component of the Semantic Web, providing a common basis for representing and exchanging domain meaning in web documents and resources. Ontology alignment is the problem of relating the elements of two formal ontologies for a semantic domain, in order to identify common concepts and relationships represented using different terminology or language, and thus allow meaningful communication and exchange of documents and resources represented using different ontologies for the same domain. Many algorithms have been proposed for ontology alignment, each with its own strengths and weaknesses. The problem is in many ways similar to near-miss clone detection: while much of the description of concepts in two ontologies may be similar, there can be differences in structure or vocabulary that make similarity detection challenging. Based on our previous work extending clone detection to modelling languages such as WSDL using contextualization, in this work we apply near-miss clone detection to the problem of ontology alignment, and use the new notion of "best-match" clone detection to achieve results similar to many existing ontology alignment algorithms when applied to standard benchmarks.
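    The "best-match" idea can be sketched as: for each element of one ontology, keep only its single most similar counterpart in the other, and only when the similarity clears a threshold. Token-level Jaccard similarity here is an assumed stand-in for the paper's clone-detection similarity measure.

    ```python
    # Schematic best-match alignment. The similarity function and the
    # threshold value are illustrative assumptions, not the paper's setup.

    def jaccard(a, b):
        """Token-overlap similarity between two element labels."""
        ta, tb = set(a.lower().split()), set(b.lower().split())
        return len(ta & tb) / len(ta | tb)

    def best_match_align(left, right, sim, threshold=0.3):
        """Pair each left element with its single most similar right
        element, keeping only pairs whose similarity clears the threshold."""
        pairs = []
        for a in left:
            best = max(right, key=lambda b: sim(a, b))
            if sim(a, best) >= threshold:
                pairs.append((a, best))
        return pairs
    ```

    Restricting each element to its single best counterpart is what distinguishes alignment from ordinary clone detection, where one fragment may match many others.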

    Towards technological approaches for concept maps mining from text

    ABSTRACT: Concept maps are resources for the representation and construction of knowledge. They allow showing, through concepts and relationships, how knowledge about a subject is organized. Technological advances have boosted the development of approaches for the automatic construction of concept maps, to facilitate and provide the benefits of that resource more broadly. Due to the need to better identify and analyze the functionalities and characteristics of those approaches, we conducted a detailed study of technological approaches for automatic construction of concept maps published between 1994 and 2016 in the IEEE Xplore, ACM, and Elsevier Science Direct databases. From this study, we elaborate a categorization defined on two perspectives, Data Source and Graphic Representation, and fourteen categories. The study collected 30 relevant articles, which were analyzed against the proposed categorization to identify the main features and limitations of each approach. A detailed view of these approaches, their characteristics, and their techniques is presented, enabling a quantitative analysis. In addition, the categorization has given us objective conditions to establish new specification requirements for a new technological approach aiming at concept map mining from texts.

    MLMLM: link prediction with mean likelihood masked language model

    ABSTRACT: Knowledge Bases (KBs) are easy to query, verifiable, and interpretable. They however scale with man-hours and high-quality data. Masked Language Models (MLMs), such as BERT, scale with computing power as well as unstructured raw text data. The knowledge contained within these models is however not directly interpretable. We propose to perform link prediction with MLMs to address both the KBs' scalability issues and the MLMs' interpretability issues. By committing the knowledge embedded in MLMs to a KB, it becomes interpretable. To do that we introduce MLMLM, Mean Likelihood Masked Language Model, an approach comparing the mean likelihood of generating the different entities to perform link prediction in a tractable manner. We obtain State of the Art (SotA) results on the WN18RR dataset and SotA results on the Precision@1 metric in the Wikidata5M inductive and transductive settings. We also obtain convincing results on link prediction for previously unseen entities, making MLMLM a suitable approach to introducing new entities to a KB.
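    The mean-likelihood comparison at the heart of this abstract can be sketched as follows. The `token_logprob` callable is a stand-in for scoring a token with a real masked language model such as BERT; averaging over tokens keeps scores comparable across entity names of different lengths.

    ```python
    # Schematic sketch of mean-likelihood entity ranking. The scoring
    # function is an assumed stand-in, not the MLMLM model itself.

    def rank_entities(context, entity_tokens, token_logprob):
        """Rank candidate entities for a link-prediction query by the
        mean log-likelihood of their tokens given the context, so that
        multi-token names are not penalized for their length."""
        def mean_ll(tokens):
            return sum(token_logprob(context, t) for t in tokens) / len(tokens)
        return sorted(entity_tokens, key=lambda e: mean_ll(entity_tokens[e]), reverse=True)
    ```

    Because ranking only needs one forward scoring pass per candidate name, this keeps link prediction tractable compared with generating entities token by token.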

    Study and implementation of an explanatory pedagogical agent

    Master's thesis digitized by the Direction des bibliothèques of the Université de Montréal